36 research outputs found

    Decomposition-based mission planning for fixed-wing UAVs surveying in wind

    This paper presents a new method for planning fixed-wing aerial survey paths that ensures efficient image coverage of a large, complex agricultural field in the presence of wind. By decomposing any complex polygonal field into multiple convex polygons, the traditional back-and-forth boustrophedon paths can be used to ensure coverage of the decomposed regions. To decompose a complex field efficiently and quickly, a top-down recursive greedy approach traverses the search space in order to minimise the flight time of the survey. This optimisation can be computed fast enough for use in the field. As wind can severely affect flight time, it is included in the flight time calculation in a systematic way using a verified cost function that offers greatly reduced survey times in wind. Further improved cost functions have been developed to take into account real-world constraints, e.g. No Fly Zones, in addition to flight time. A number of real surveys are performed to show that the flight-time-in-wind model is accurate, to make comparisons with previous techniques, and to show that the proposed method works in real-world conditions while providing total image coverage. A number of missions are generated and flown for real complex agricultural fields. In addition, the wind field around a survey area is measured with a multi-rotor carrying an ultrasonic wind speed sensor. This shows that the assumption of a steady uniform wind holds true for the small areas and time scales of an Unmanned Aerial Vehicle (UAV) aerial survey.
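    As a rough illustration of how wind enters such a flight-time cost, the sketch below computes the achievable ground speed along a track under a steady uniform wind and sums it over alternating boustrophedon passes. It is a minimal approximation (constant airspeed, turns ignored, illustrative function names and numbers), not the authors' verified cost function.

```python
import math

def ground_speed(airspeed, wind, heading):
    """Ground speed (m/s) along a track heading for a steady uniform wind.

    airspeed: true airspeed of the UAV (m/s)
    wind:     (east, north) wind velocity components (m/s)
    heading:  track direction in radians (0 = east, pi/2 = north)
    """
    tx, ty = math.cos(heading), math.sin(heading)        # unit track vector
    along = wind[0] * tx + wind[1] * ty                  # tailwind component
    cross = -wind[0] * ty + wind[1] * tx                 # crosswind component
    if abs(cross) >= airspeed:
        raise ValueError("crosswind exceeds airspeed; track cannot be held")
    gs = math.sqrt(airspeed ** 2 - cross ** 2) + along   # crab into the wind
    if gs <= 0:
        raise ValueError("headwind too strong to progress along the track")
    return gs

def boustrophedon_time(pass_length, n_passes, heading, airspeed, wind):
    """Approximate survey time for alternating passes; turns are ignored."""
    total = 0.0
    for i in range(n_passes):
        h = heading if i % 2 == 0 else heading + math.pi  # reverse each pass
        total += pass_length / ground_speed(airspeed, wind, h)
    return total

# Example: twelve 500 m north-south passes at 18 m/s airspeed,
# 6 m/s wind blowing from the west (wind vector points east).
print(boustrophedon_time(500.0, 12, math.pi / 2, 18.0, (6.0, 0.0)))
```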

    Vehicle localization by lidar point correlation improved by change detection

    LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. As dynamic objects, such as cars, pedestrians or even construction sites, could lead to wrong localization results, we use a change detection algorithm to detect these objects in the reference data. If an object occurs in a certain number of measurements at the same position, we mark it and every point it contains as static. In the next step, we merge the data of the single measurement epochs into one reference dataset, whereby we only use static points. Furthermore, we use a classification algorithm to detect trees. For the online localization of the vehicle, we use simulated data of a vertically aligned automotive LiDAR sensor. As we only want to use static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data is derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and, further, to detect dynamic objects online. The localization can then be done by a point-to-image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93% of the cases along a trajectory of 13 km in Hannover, Germany.
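    The point-to-image correlation step can be pictured as projecting the static scan points into the reference raster and scoring candidate poses by the raster values they hit. The sketch below is a minimal grid-search illustration under that assumption; the function names and the exhaustive pose search are placeholders, not the authors' implementation.

```python
import numpy as np

def correlation_score(ref_img, points_xy, pose, resolution, origin):
    """Score a candidate pose by summing reference-raster values under the
    static scan points transformed into the map frame.

    ref_img:    2D raster derived from the merged static reference point cloud
    points_xy:  (N, 2) static scan points in the vehicle frame (metres)
    pose:       (x, y, yaw) candidate vehicle pose in the map frame
    resolution: raster cell size (metres); origin: map coords of cell (0, 0)
    """
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    world = points_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
    cols = ((world[:, 0] - origin[0]) / resolution).astype(int)
    rows = ((world[:, 1] - origin[1]) / resolution).astype(int)
    ok = (rows >= 0) & (rows < ref_img.shape[0]) & \
         (cols >= 0) & (cols < ref_img.shape[1])
    return float(ref_img[rows[ok], cols[ok]].sum())

def localize(ref_img, points_xy, pose_candidates, resolution, origin):
    """Return the candidate pose with the highest correlation score."""
    scores = [correlation_score(ref_img, points_xy, p, resolution, origin)
              for p in pose_candidates]
    return pose_candidates[int(np.argmax(scores))]
```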

    Orientation of oblique airborne image sets - Experiences from the ISPRS/Eurosdr benchmark on multi-platform photogrammetry

    During the last decade the use of airborne multi-camera systems increased significantly. The development in digital camera technology allows mounting several mid- or small-format cameras efficiently onto one platform and thus enables image capture under different angles. Those oblique images turn out to be interesting for a number of applications, since lateral parts of elevated objects, like buildings or trees, are visible. However, occlusion or illumination differences might challenge image processing. From an image orientation point of view, those multi-camera systems bring the advantage of a better ray intersection geometry compared to nadir-only image blocks. On the other hand, varying scale, occlusion and atmospheric influences which are difficult to model impose problems on the image matching and bundle adjustment tasks. In order to understand the current limitations of image orientation approaches and the influence of different parameters such as image overlap or GCP distribution, a commonly available dataset was released. The originally captured data comprises a state-of-the-art image block with very high overlap, but in the first stage of the so-called ISPRS/EuroSDR benchmark on multi-platform photogrammetry only a reduced set of images was released. In this paper some first results obtained with this dataset are presented. They refer to different aspects such as tie point matching across the viewing directions, the influence of the oblique images on the bundle adjustment, and the role of image overlap and GCP distribution. As far as tie point matching is concerned, we observed that matching of overlapping images pointing to the same cardinal direction, or between nadir and oblique views in general, is quite successful. Due to the quite different perspectives between images of different viewing directions, standard tie point matching, for instance based on interest points, does not work well; how to address occlusion and ambiguities due to different views onto objects is clearly an unsolved research problem so far. In our experiments we also confirm that the obtainable height accuracy is better when all images are used in the bundle block adjustment; this was also shown in previous research and is confirmed here. Not surprisingly, the large overlap of 80/80% provides much better object space accuracy: random errors seem to be about two to three times smaller compared to the 60/60% overlap. A comparison of different software approaches shows that newly emerged commercial packages, initially intended to work with small-frame image blocks, perform very well.
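    For reference, the interest-point tie matching discussed above, which works well within a viewing direction but degrades across directions, can be pictured as in the OpenCV sketch below (SIFT keypoints with a Lowe ratio test). The file paths and ratio threshold are placeholders, and this is only an illustrative baseline, not the benchmark pipeline.

```python
import cv2

def match_tie_points(path_a, path_b, ratio=0.75):
    """Interest-point tie matching between two images: SIFT keypoints,
    brute-force descriptor matching and the Lowe ratio test."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    # return matched pixel coordinate pairs (image A, image B)
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]

# pairs = match_tie_points("nadir.tif", "oblique_east.tif")
```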

    Pléiades project: Assessment of georeferencing accuracy, image quality, pansharpening performance and DSM/DTM quality

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, a second supported by the BEU Scientific Research Project Program, and a third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with standard deviations in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating a satisfying image quality. The SNR is in the range of other comparable spaceborne images, which may be caused by the de-noising of the Pléiades images. The pansharpened images were generated by various methods and validated by the most common statistical metrics as well as by visual interpretation. The generated DSM and DTM achieved a standard deviation in Z (SZ) of ±1.6 m in relation to a reference DTM.
    Airbus Defence and Space; BEU/2014-47912266-01; TÜBİTAK/114Y38
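    Bias-corrected RPC orientation can be illustrated as fitting a small affine correction in image space from GCPs whose ground coordinates have been projected through the vendor RPCs. The sketch below assumes the RPC projection itself is done elsewhere and uses illustrative function names; it is not the software actually evaluated in the project.

```python
import numpy as np

def estimate_rpc_bias(rpc_projected_px, measured_px):
    """Fit an affine bias correction in image space from GCP observations.

    rpc_projected_px: (N, 2) image coordinates predicted by the vendor RPCs
    measured_px:      (N, 2) image coordinates measured for the same GCPs
    Returns a (2, 3) affine matrix A with corrected = A @ [col, row, 1].
    """
    design = np.hstack([rpc_projected_px, np.ones((len(rpc_projected_px), 1))])
    coeffs, *_ = np.linalg.lstsq(design, measured_px, rcond=None)  # (3, 2)
    return coeffs.T

def apply_rpc_bias(rpc_projected_px, affine):
    """Apply the estimated affine correction to RPC-projected coordinates."""
    design = np.hstack([rpc_projected_px, np.ones((len(rpc_projected_px), 1))])
    return design @ affine.T
```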

    Internet of Things in Agricultural Innovation and Security

    The agricultural Internet of Things (Ag-IoT) paradigm has tremendous potential for the transparent integration of underground soil sensing, farm machinery and sensor-guided irrigation systems with the complex social network of growers, agronomists, crop consultants and advisors. The aim of this chapter on the IoT in agricultural innovation and security is to present agricultural IoT research and the Ag-IoT paradigm to promote the sustainable production of safe, healthy and profitable crop and animal agricultural products. The chapter covers an IoT platform to test optimized management strategies, engage farmer and industry groups, and investigate new and traditional technology drivers that will enhance the resilience of farmers to socio-environmental changes. A review of state-of-the-art communication architectures, underlying sensing technologies and communication mechanisms is presented, with coverage of recent advances in the theory and applications of wireless underground communications. Major challenges in Ag-IoT design and implementation are also discussed.

    UAS-based automatic bird count of a common gull colony

    The standard procedure for counting birds is a manual one. However, a manual bird count is a time-consuming and cumbersome process, requiring several people going from nest to nest counting the birds and the clutches. High-resolution imagery generated with a UAS (Unmanned Aircraft System) offers an interesting alternative. Experiences and results of UAS surveys for automatic bird counts over the last two years are presented for the bird reserve island Langenwerder. For 2011, 1568 birds (±5%) were detected on the image mosaic, based on multispectral image classification and GIS-based post-processing. Building on the experiences of 2011, the automatic bird count of 2012 became more efficient and more accurate: for 2012, 1938 birds were counted with an accuracy of approximately ±3%. Additionally, a separation of breeding and non-breeding birds was performed under the assumption that standing birds cast a visible shadow. The final section of the paper is devoted to the analysis of the 3D point cloud, which was used to determine the height of the vegetation and the extent and depth of closed sinks, which are unsuitable for breeding birds.
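    A heavily simplified version of such an automatic count, thresholding bright (white-plumage) pixels in the mosaic and counting bird-sized connected components, is sketched below. The threshold and size limits are placeholders; the paper's actual workflow relies on multispectral classification with GIS-based post-processing.

```python
import numpy as np
from scipy import ndimage

def count_birds(mosaic, threshold=0.8, min_pixels=20, max_pixels=400):
    """Rough automatic bird count on a brightness mosaic scaled to [0, 1]:
    threshold bright (white-plumage) pixels, label connected components
    and keep only bird-sized blobs."""
    mask = mosaic > threshold                          # candidate gull pixels
    labels, n = ndimage.label(mask)                    # connected components
    sizes = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
    return int(np.count_nonzero((sizes >= min_pixels) & (sizes <= max_pixels)))
```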

    Crop height determination with UAS point clouds

    The accurate determination of the height of agricultural crops helps to predict yield, biomass etc. These relationships are of great importance not only for crop production but also in grassland management, because the available biomass and the food quality are valuable information. However, there is no cost-efficient, automatic system available for determining crop height. 3D point clouds generated from high-resolution UAS imagery offer a new alternative. Two different approaches for crop height determination are presented. The "difference method", where the canopy height is determined by taking the difference between a current UAS surface model and an existing digital terrain model (DTM), is the most suitable and most accurate method. In situ measurements, vegetation indices and yield observations correlate well with the determined UAS crop heights.
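    The difference method reduces to a per-cell subtraction once the UAS surface model and the DTM are co-registered on the same grid, as in the minimal numpy sketch below; the array inputs and the clamping of small negative differences are assumptions of this illustration.

```python
import numpy as np

def crop_height(dsm, dtm):
    """Difference method: canopy height model = UAS surface model minus DTM.

    dsm, dtm: co-registered 2D arrays on the same grid (heights in metres),
    with NaN marking cells that have no data.
    """
    valid = ~np.isnan(dsm) & ~np.isnan(dtm)
    chm = np.full(dsm.shape, np.nan)
    chm[valid] = np.clip(dsm[valid] - dtm[valid], 0.0, None)  # clamp negatives
    return chm
```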